Sensitivity analysis in HMMs with application to likelihood maximization

Authors

  • Pierre-Arnaud Coquelin
  • Romain Deguest
  • Rémi Munos
Abstract

This paper considers sensitivity analysis in Hidden Markov Models (HMMs) with continuous state and observation spaces. We propose an Infinitesimal Perturbation Analysis (IPA) of the filtering distribution with respect to some parameters of the model. We describe a methodology for turning any algorithm that estimates the filtering density, such as Sequential Monte Carlo methods, into an algorithm that estimates its gradient. The resulting IPA estimator is proven to be asymptotically unbiased and consistent, and its computational complexity is linear in the number of particles. We apply this analysis to the problem of identifying unknown parameters of the model given a sequence of observations, and derive an IPA estimator for the gradient of the log-likelihood, which may be used in a gradient method for likelihood maximization. We illustrate the method with several numerical experiments.
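To illustrate the kind of computation the abstract describes, here is a minimal sketch, not the paper's IPA estimator: a bootstrap particle filter on a hypothetical one-dimensional linear-Gaussian HMM (x_t = theta * x_{t-1} + v_t, y_t = x_t + w_t, unit noise variances), with the log-likelihood gradient approximated by central finite differences under common random numbers (reusing the RNG seed). This is a crude stand-in for the pathwise IPA derivative developed in the paper; all model choices here are illustrative assumptions.

```python
import numpy as np

def pf_loglik(theta, y, n_particles, rng_seed=0):
    """Bootstrap particle filter log-likelihood estimate for the toy HMM
    x_t = theta * x_{t-1} + v_t, y_t = x_t + w_t, v_t, w_t ~ N(0, 1).
    A fixed rng_seed gives common random numbers across theta values."""
    rng = np.random.default_rng(rng_seed)
    x = rng.standard_normal(n_particles)              # initial particles
    loglik = 0.0
    for obs in y:
        x = theta * x + rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * (obs - x) ** 2                       # N(obs; x, 1) up to a constant
        m = logw.max()
        w = np.exp(logw - m)
        # log p(y_t | y_{1:t-1}) ~= log mean_i N(y_t; x_i, 1)
        loglik += m + np.log(w.mean()) - 0.5 * np.log(2 * np.pi)
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())  # multinomial resampling
        x = x[idx]
    return loglik

def grad_loglik(theta, y, n_particles=500, eps=1e-2):
    """Central finite difference with common random numbers. Resampling
    makes the estimate discontinuous in theta, so this is noisier than a
    true pathwise (IPA) gradient; eps is kept moderate for that reason."""
    return (pf_loglik(theta + eps, y, n_particles)
            - pf_loglik(theta - eps, y, n_particles)) / (2 * eps)

# Simulate data from the model with true theta = 0.8.
rng = np.random.default_rng(42)
x_true, y = 0.0, []
for _ in range(50):
    x_true = 0.8 * x_true + rng.standard_normal()
    y.append(x_true + rng.standard_normal())

# One gradient-ascent step on the log-likelihood from an initial guess.
theta = 0.5
theta += 0.01 * grad_loglik(theta, y)
```

The paper's contribution is precisely to replace the noisy finite-difference step above with a consistent, asymptotically unbiased gradient estimator whose cost stays linear in the number of particles.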


Similar articles

Quasi-Newton method for maximum likelihood estimation of hidden Markov models

Hidden Markov models (HMMs) are used in many signal processing applications, including speech recognition, blind equalization of digital communications channels, etc. The most widely used method for maximum likelihood estimation of HMM parameters is the forward-backward (or Baum–Welch) algorithm, which is an early example of the application of the Expectation-Maximization (EM) principle. In this contr...


Maximum likelihood and minimum classification error factor analysis for automatic speech recognition

Hidden Markov models (HMMs) for automatic speech recognition rely on high dimensional feature vectors to summarize the short-time properties of speech. Correlations between features can arise when the speech signal is non-stationary or corrupted by noise. We investigate how to model these correlations using factor analysis, a statistical method for dimensionality reduction. Factor analysis uses...



Hidden Gauss-Markov models for signal classification

Continuous-state hidden Markov models (CS-HMMs) are developed as a tool for signal classification. Analogs of the Baum, Viterbi, and Baum–Welch algorithms are formulated for this class of models. The CS-HMM algorithms are then specialized to hidden Gauss–Markov models (HGMMs) with linear Gaussian state-transition and output densities. A new Gaussian refactorization lemma is used to show that th...


Discriminative training of HMM using maximum normalized likelihood algorithm

In this paper, we present the Maximum Normalized Likelihood Estimation (MNLE) algorithm and its application for discriminative training of HMMs for continuous speech recognition. The objective of this algorithm is to maximize the normalized frame likelihood of training data. Instead of gradient descent techniques usually applied for objective function optimization in other discriminative algori...



Journal:

Volume:   Issue:

Pages: -

Publication date: 2009